When facts aren’t enough



Developed by researchers from the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University, and ASU’s Center for Strategic Communication, Skeptik is a browser-based tool powered by artificial intelligence that identifies and explains logical fallacies in online news articles. Graphic by Andrea Heser/ASU


In the age of viral headlines and endless scrolling, misinformation travels faster than the truth. Even careful readers can be swayed by stories that sound factual but twist logic in subtle ways, quietly distorting reality without ever quite crossing the line into a lie.

That’s where Skeptik comes in.

Developed by a cross-disciplinary team from Arizona State University, Skeptik is a new browser-based tool designed to help readers recognize these hidden flaws. The system — created by researchers from the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering, and ASU's Center for Strategic Communication — uses large language models, similar to those that power modern artificial intelligence, or AI, chatbots, and combines them with human communication theory to automatically identify and explain logical fallacies in online news articles.

“Our goal isn’t to tell people what to think,” says Fan Lei, a researcher who led the project until he received his computer science doctoral degree from the Fulton Schools in 2025. “It’s to help them see how an argument is built, where it’s solid and where it might be taking shortcuts. We want to empower readers to think critically, not passively consume information.”

Seeing through the spin

Traditional fact-checking can verify whether a claim is true, but it often misses the deeper structure of persuasion and the rhetorical sleights of hand that make falsehoods seem reasonable. The Skeptik framework fills that gap by scanning news text for logical inconsistencies and marking suspect sentences directly within the article.

Readers can then click to reveal brief explanations and external evidence, or even start a live chat with an AI model that provides deeper clarification. In the system’s prototype, each fallacy type is color-coded and linked to an interactive sidebar. A vague statement, for instance, might appear underlined in purple, while a red line could flag a strawman argument. Hovering reveals a short explanation, and clicking opens multilayered “intervention” panels that guide readers through progressively deeper insights.
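
For readers curious about the mechanics, here is a minimal sketch of how that kind of in-page marking could work. The names (FallacySpan, FALLACY_COLORS, annotateArticle) and the styling details are illustrative assumptions, not Skeptik’s published code.

```typescript
// Hypothetical sketch: wrapping flagged sentences in color-coded,
// hoverable spans. None of these names come from Skeptik itself.

interface FallacySpan {
  start: number;        // character offset where the suspect sentence begins
  end: number;          // character offset where it ends
  type: string;         // e.g. "vagueness", "strawman"
  explanation: string;  // short note shown on hover
}

// Colors follow the prototype described above: purple underlines for
// vague statements, red for strawman arguments; gray is a fallback.
const FALLACY_COLORS: Record<string, string> = {
  vagueness: "purple",
  strawman: "red",
};

function annotateArticle(text: string, spans: FallacySpan[]): string {
  // Splice in markup from the back of the text forward so earlier
  // character offsets stay valid as the string grows.
  let html = text;
  for (const s of [...spans].sort((a, b) => b.start - a.start)) {
    const color = FALLACY_COLORS[s.type] ?? "gray";
    const marked =
      `<span class="fallacy" style="text-decoration: underline ${color};"` +
      ` title="${s.explanation}">${html.slice(s.start, s.end)}</span>`;
    html = html.slice(0, s.start) + marked + html.slice(s.end);
  }
  return html; // a real implementation would also escape the explanation text
}
```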

The first layer offers a simple clarification, explaining why the reasoning may be misleading. The second layer provides supporting evidence and counterarguments to help readers evaluate the claim more critically. The third layer offers proactive context, anticipating similar misinformation patterns before the reader encounters them again.
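
Those three layers suggest a simple data shape. The sketch below is one way to model them; the field names are purely illustrative, not a published schema.

```typescript
// One possible model of the three intervention layers attached to a
// flagged sentence; the field names are assumptions.

interface Intervention {
  clarification: string;     // layer 1: why the reasoning may be misleading
  evidence: string[];        // layer 2: supporting evidence and counterarguments
  proactiveContext: string;  // layer 3: similar misinformation patterns to expect
}

// A click handler could step through the layers in order, letting the
// reader go only as deep as they choose.
function* walkLayers(panel: Intervention): Generator<string> {
  yield panel.clarification;
  yield panel.evidence.join("\n");
  yield panel.proactiveContext;
}
```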

“People don’t always fall for misinformation because they’re careless,” Lei says. “They fall for it because persuasive writing often feels logical. We wanted to give readers a way to pause and ask, ‘Does this conclusion really follow from the evidence?’”

Ross Maciejewski, director of the School of Computing and Augmented Intelligence, part of the Fulton Schools, receives the 2025 Institute of Electrical and Electronics Engineers Visualization and Graphics Technical Community Service Award onstage at the VIS 2025 Conference in Vienna, Austria, for extraordinary leadership in the field of data visualization. As part of his ongoing research, Maciejewski leads projects such as Skeptik, which harnesses the power of computer graphics for the public good. Photo courtesy of Ross Maciejewski/ASU

AI that helps us think for ourselves

The project draws its conceptual roots from inoculation theory, a communication framework suggesting that exposure to small doses of misinformation, paired with explanations of why they’re wrong, can build resistance to larger falsehoods later. That insight came from co-author Steve Corman, a professor emeritus in the Hugh Downs School of Human Communication and a longtime expert in strategic communication and narrative analysis.

The research team designed Skeptik’s AI to work like a conversational assistant, not an all-knowing judge. When it flags a potential fallacy, it phrases the finding cautiously, displaying messages like, “This may be an example of X,” leaving the final judgment to the reader.
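
That cautious phrasing is easy to picture in code. The sketch below, with a hypothetical formatFlag helper and an arbitrary confidence threshold, shows the idea; it is an illustration, not Skeptik’s actual logic.

```typescript
// Sketch of the hedged wording described above. formatFlag and the
// 0.5 threshold are illustrative assumptions.

function formatFlag(fallacyType: string, confidence: number): string {
  // Below the threshold, say nothing rather than over-flag.
  if (confidence < 0.5) return "";
  return `This may be an example of ${fallacyType}. ` +
    `Consider whether the conclusion really follows from the evidence.`;
}

console.log(formatFlag("a strawman argument", 0.8));
// -> "This may be an example of a strawman argument. Consider whether ..."
```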

That balance between automation and human interpretation reflects one of Ross Maciejewski’s key leadership priorities. Maciejewski is a professor of computer science and engineering as well as director of the School of Computing and Augmented Intelligence. He oversaw the students’ work on Skeptik and encouraged them to design systems that amplify human reasoning rather than replace it.

“Good visualization and AI tools should help people think more clearly, not take the thinking away,” Maciejewski says. “Skeptik embodies that philosophy. It’s a system built to foster critical awareness.”

Maciejewski, a national leader in data visualization and human-centered AI, has long championed projects that bridge computation and social impact. During his tenure, he has expanded collaborations to tackle challenges at the intersection of technology, policy and public communication.

Fan Lei presents Skeptik at an academic event. Now a postdoctoral scholar at the University of Waterloo in Ontario, Canada, Lei earned his doctoral degree in computer science from the Fulton Schools in 2025, working under Maciejewski’s supervision on data visualization research projects. Photo courtesy of Fan Lei

Bridging code and conversation

That interdisciplinary spirit is central to Skeptik’s success. While the engineers built the underlying software framework for smooth in-browser performance, the communication scholars helped ensure that the explanations actually made sense to everyday readers.

“Working with Professor Corman was eye-opening,” says Lei. “He helped us translate technical detection results into meaningful guidance that resonates with how people read, argue and form opinions. It’s a true human-AI collaboration.”

The system currently detects nine common fallacy types, including cherry-picking, false cause and red herring, and then links them visually to the text. In a series of case studies, Skeptik successfully identified misleading reasoning in articles about climate change, election processes and public health, often providing context that clarified complex issues without sensationalism.

When tested against the Ad Fontes Media dataset, which rates thousands of news outlets on bias and reliability, Skeptik revealed a striking trend: The more biased a source, the more logical fallacies its articles contained. The findings suggest that fallacy detection could one day complement existing fact-checking metrics, offering a more nuanced measure of news credibility.
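
The trend itself is the kind of thing a short script could check. Below is a rough illustration; the data shape and function names are hypothetical, and a positive Pearson correlation between bias scores and flagged fallacies per article would match what the team reported.

```typescript
// Back-of-the-envelope version of the reported trend check. The data
// shape and function names are assumptions for illustration only.

interface OutletStats {
  biasScore: number;           // magnitude of the outlet's bias rating
  fallaciesPerArticle: number; // mean fallacies flagged per article
}

// Standard Pearson correlation; assumes xs and ys have equal length.
function pearson(xs: number[], ys: number[]): number {
  const n = xs.length;
  const mean = (a: number[]) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - mx;
    const dy = ys[i] - my;
    cov += dx * dy;
    vx += dx * dx;
    vy += dy * dy;
  }
  return cov / Math.sqrt(vx * vy);
}

function biasFallacyCorrelation(outlets: OutletStats[]): number {
  return pearson(
    outlets.map(o => o.biasScore),
    outlets.map(o => o.fallaciesPerArticle),
  );
}
// A value near +1 would echo the finding: more biased sources
// tend to contain more flagged logical fallacies.
```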

“There’s a real need for tools like Skeptik, ones that don’t tell people what to think but how to think about media content more critically,” Corman says. “We intentionally designed it not to say, ‘This statement is wrong,’ but rather, ‘Here’s why you should think about this statement more carefully.’”

A future of smarter reading

For the researchers, Skeptik is less about policing and more about cultivating intellectual curiosity.

“In an age of information overload, it’s easy to get cynical,” says Lei, who is now a postdoctoral scholar at the University of Waterloo in Ontario, Canada. “But if we can build tools that make critical reading more interactive, and even a little fun, we can restore some of that trust between journalists and audiences.”

The team envisions Skeptik as an open, evolving framework. Future versions could integrate visualizations to show how an article’s logical structure flows, or allow crowdsourced annotations where readers collaborate to refine fallacy detection. The researchers also hope to adapt the system for classrooms, where students can learn to analyze bias and rhetoric through hands-on engagement.

For Maciejewski, this project reflects the growing role of AI in addressing societal challenges.

“This kind of research captures the best of what the Fulton Schools stands for,” he says. “It’s technically sophisticated but also deeply human. We’re building technology that doesn’t just process information. It helps people make sense of it.”
